
Fix STAMP Build / Update Installation Instruction#93

Merged
FWao merged 39 commits into main from fix/build
Jul 21, 2025

Conversation

@FWao
Member

@FWao FWao commented Jul 18, 2025

This PR makes it possible to install STAMP with all slide encoders on both CPU-only and CUDA systems.

Since #75, dependencies like flash-attn, causal-conv1d, or mamba-ssm could be installed / compiled against a different PyTorch version than the one in the environment, which caused errors.

Some changes in this PR are temporary measures to simplify the installation process. They can be reverted as soon as the underlying issues have been resolved.

The triton version has been pinned until the upstream issue is resolved, which caused torch to be downgraded to 2.6.0. Since the flash-attn wheels for torch 2.6.0 are broken on my setups, I had to force flash-attn to build from source. Unfortunately, the no-binary-package entry in the pyproject.toml file does not guarantee that, which is the reason for the flash-attention fork. This can be reverted as soon as the issue is fixed.
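For reference, the uv settings involved would look roughly like this in pyproject.toml (a sketch based on uv's documented `no-binary-package` / `no-build-isolation-package` options; the package list and placement are illustrative, not the exact diff of this PR):

```toml
[tool.uv]
# Illustrative sketch: ask uv to build flash-attn from source rather
# than pulling a prebuilt wheel, and to build it without isolation so
# it compiles against the torch already in the environment.
no-binary-package = ["flash-attn"]
no-build-isolation-package = ["flash-attn"]
```

As noted above, the no-binary-package setting alone did not reliably force a source build here, hence the fork.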

This PR bumps the default Python version to 3.12 (while remaining compatible with 3.11).
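In pyproject.toml terms, that compatibility claim corresponds to something like the following (illustrative, standard `[project]` metadata):

```toml
[project]
# default interpreter bumped to 3.12, but 3.11 remains supported
requires-python = ">=3.11"
```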

  • Update the uv install URLs before merging this PR

closes #81

Follow-up: #94 (publish PyPI package)

FWao and others added 30 commits July 18, 2025 10:59
- make validation and testing more efficient
- improve lightning_model code
- make the ALiBi mask optional when no attention mask is passed as a parameter
- add tutorial on how to use slide and patient encoding
- set chief as default encoder
- update readme for new installation steps with uv
@FWao FWao requested a review from EzicStar July 18, 2025 16:52
@FWao FWao self-assigned this Jul 18, 2025
@georg-wolflein
Contributor

Wow, thanks for fixing this Fabi!

@EzicStar
Contributor

EzicStar commented Jul 20, 2025

Wow!!! Great job Fabi. The installation steps are clear, and so is the PR description. Unfortunately, when I reproduced the installation steps on a GPU machine (updated uv and ran the commands one by one), the models that require flash-attention fail:

    # Copyright (c) 2023, Tri Dao.
    
    from typing import Optional, Sequence, Tuple, Union
    
    import torch
    import torch.nn as nn
    import os
    
    # isort: off
    # We need to import the CUDA kernels after importing torch
    USE_TRITON_ROCM = os.getenv("FLASH_ATTENTION_TRITON_AMD_ENABLE", "FALSE") == "TRUE"
    if USE_TRITON_ROCM:
        from .flash_attn_triton_amd import interface_fa as flash_attn_gpu
    else:
>       import flash_attn_2_cuda as flash_attn_gpu
E       ImportError: /mnt/bulk-sirius/juan/pap_screening/STAMP/.venv/lib/python3.12/site-packages/flash_attn_2_cuda.cpython-312-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE

.venv/lib/python3.12/site-packages/flash_attn/flash_attn_interface.py:15: ImportError

I tried it on sirius, on this branch: removed the previous .venv and ran the uv sync step with --refresh. Building flash-attn also took around 20 minutes. Hope this info helps!
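The undefined symbol in the traceback (`_ZN3c105ErrorC2E…`) demangles to a `c10::Error` constructor from libtorch, i.e. the flash_attn_2_cuda extension was compiled against a different torch build than the one currently in the venv, which is exactly the ABI mismatch this PR is working around. A minimal, stdlib-only sketch for checking what the resolver actually installed (the package names are the usual PyPI ones; adjust as needed):

```python
import importlib.metadata as md

def installed_version(pkg: str):
    """Return the installed version of pkg, or None if it is absent."""
    try:
        return md.version(pkg)
    except md.PackageNotFoundError:
        return None

# A flash-attn wheel or cached build compiled against an older torch
# is a common cause of this kind of ImportError, so the first step is
# to see which versions actually ended up in the environment.
for pkg in ("torch", "flash-attn", "triton"):
    print(f"{pkg}: {installed_version(pkg)}")
```

If the versions look consistent, a stale build cache is the next suspect; re-running the sync with --refresh, as done above, rules that out.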

Contributor

@EzicStar EzicStar left a comment


LGTM! Thank you so much for fixing this :)

@FWao FWao merged commit 257a2df into main Jul 21, 2025
28 checks passed


Development

Successfully merging this pull request may close these issues.

Remove Ubuntu openslide-tools dependency from README

3 participants